Japan and ASEAN agree to cooperate on AI development

The Japan Times

Japanese Internal Affairs and Communications Minister Yoshimasa Hayashi (center) poses for a photo with ministers from ASEAN member states in Hanoi on Thursday. HANOI - Japan and the Association of Southeast Asian Nations have agreed to work together on developing new artificial intelligence models and on preparing related laws. The AI-sector cooperation was included in a joint statement adopted at a meeting of digital ministers from Japan and ASEAN member states in Hanoi on Thursday. The statement was proposed by Hayashi, who attended the meeting. Japan and ASEAN aim to join hands at a time when the United States and China are boosting their presence in the AI sector.


Malaysia suspends access to Musk's Grok AI

The Japan Times

AFP-JIJI - Malaysia suspended access to Elon Musk's chatbot Grok over AI-generated pornographic content, the country's tech regulator said on Sunday. The decision follows a global backlash after it emerged that Grok's image-creation feature allowed users to sexualize pictures of women and children using simple text prompts. On Saturday, Indonesia became the first country to block all access to the tool, which has been restricted to paying subscribers elsewhere. The Malaysian Communications and Multimedia Commission said in a statement that it had directed a temporary restriction on access to the Grok artificial intelligence service for users in Malaysia, with immediate effect. The action follows repeated misuse of Grok to generate obscene, sexually explicit, indecent, grossly offensive and non-consensual manipulated images, the regulator said.


Supporting Dynamic Agentic Workloads: How Data and Agents Interact

Giurgiu, Ioana, Nidd, Michael E.

arXiv.org Artificial Intelligence

The rise of multi-agent systems powered by large language models (LLMs) and specialized reasoning agents exposes fundamental limitations in today's data management architectures. Traditional databases and data fabrics were designed for static, well-defined workloads, whereas agentic systems exhibit dynamic, context-driven, and collaborative behaviors. Agents continuously decompose tasks, shift attention across modalities, and share intermediate results with peers - producing non-deterministic, multi-modal workloads that strain conventional query optimizers and caching mechanisms. We propose an Agent-Centric Data Fabric, a unified architecture that rethinks how data systems serve, optimize, coordinate, and learn from agentic workloads. To achieve this we exploit the concepts of attention-guided data retrieval, semantic micro-caching for context-driven agent federations, predictive data prefetching and quorum-based data serving. Together, these mechanisms enable agents to access representative data faster and more efficiently, while reducing redundant queries, data movement, and inference load across systems. By framing data systems as adaptive collaborators, instead of static executors, we outline new research directions toward behaviorally responsive data infrastructures, where caching, probing, and orchestration jointly enable efficient, context-rich data exchange among dynamic, reasoning-driven agents.
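The semantic micro-caching idea in the abstract can be illustrated with a toy sketch: instead of exact-match keys, a cache returns a stored result when a new query's embedding is sufficiently similar to a cached one. The class name, the cosine-similarity routing, and the threshold below are all illustrative assumptions, not the paper's actual design.

```python
import math


class SemanticMicroCache:
    """Toy semantic cache: serves a cached result when a new query
    embedding is close enough (cosine similarity) to a stored one.
    A hypothetical sketch of the idea, not the paper's mechanism."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, result) pairs

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    def get(self, embedding):
        """Return the best semantically matching result, or None."""
        best, best_sim = None, 0.0
        for emb, result in self.entries:
            sim = self._cosine(embedding, emb)
            if sim > best_sim:
                best, best_sim = result, sim
        return best if best_sim >= self.threshold else None

    def put(self, embedding, result):
        self.entries.append((embedding, result))
```

In a federation of agents, such a cache would let a second agent reuse a peer's retrieval for a paraphrased query, cutting the redundant queries and inference load the abstract describes.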


PromptTailor: Multi-turn Intent-Aligned Prompt Synthesis for Lightweight LLMs

Xu, Yizhou, Davis, Janet

arXiv.org Artificial Intelligence

Lightweight language models remain attractive for on-device and privacy-sensitive applications, but their responses are highly sensitive to prompt quality. For open-ended generation, non-expert users often lack the knowledge or time to consistently craft high-quality prompts, leading them to rely on prompt optimization tools. However, a key challenge is ensuring the optimized prompts genuinely align with users' original intents and preferences. We introduce PromptTailor, a system for controllable prompt generation for open-ended text that improves model output quality by intent-aligned prompt synthesis. PromptTailor expands minimal user instructions into rich, domain-aware prompts while preserving the user's stated preferences. The system is a quantized Llama3-8B model fine-tuned with a lightweight LoRA adapter on 12,300 prompt-refinement dialogues spanning 41 everyday domains, distilled from three stronger LLMs. The adapter attaches to any Llama3-8B base, enabling edge deployment. In human and LLM-judge evaluations across multiple target models and optimization baselines, PromptTailor yields higher preference rates than chain-of-thought prompting and matches or surpasses state-of-the-art prompt optimization methods while requiring fewer model calls (e.g., 3 vs. 9). These results show that a compact student, guided by powerful teachers, can learn effective prompt-generation strategies that enhance response quality while maintaining alignment with user intent.
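The core contract of intent-aligned prompt synthesis — expand a minimal instruction while keeping the user's stated preferences intact — can be sketched as below. The `refine` callable stands in for the fine-tuned refiner model, and the whole interface is a hypothetical illustration, not PromptTailor's actual API.

```python
def expand_prompt(instruction, preferences, refine):
    """Expand a minimal user instruction into a richer prompt while
    carrying the user's stated preferences verbatim into the result.

    `refine` stands in for a learned prompt-refinement model; here it
    can be any callable mapping a short instruction to a fuller draft.
    This is an illustrative sketch, not the system's real interface.
    """
    draft = refine(instruction)
    constraints = "; ".join(f"{k}: {v}" for k, v in preferences.items())
    if constraints:
        return f"{draft}\nConstraints to preserve: {constraints}"
    return draft
```

The point of the separation is that the expansion step may be arbitrarily creative, but the user's preferences are appended mechanically, so they cannot be dropped or reworded by the refiner.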


CharCom: Composable Identity Control for Multi-Character Story Illustration

Wang, Zhongsheng, Lin, Ming, Lin, Zhedong, Shakib, Yaser, Liu, Qian, Liu, Jiamou

arXiv.org Artificial Intelligence

Ensuring character identity consistency across varying prompts remains a fundamental challenge in diffusion-based text-to-image generation. We propose CharCom, a modular and parameter-efficient framework that achieves character-consistent story illustration through composable LoRA adapters, enabling efficient per-character customization without retraining the base model. Built on a frozen diffusion backbone, CharCom dynamically composes adapters at inference using prompt-aware control. Experiments on multi-scene narratives demonstrate that CharCom significantly enhances character fidelity, semantic alignment, and temporal coherence. It remains robust in crowded scenes and enables scalable multi-character generation with minimal overhead, making it well suited for real-world applications such as story illustration and animation.
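The prompt-aware composition step can be sketched as a simple router: per-character adapters are activated when their character appears in the scene prompt, and blend weights are assigned across the active set. The name-matching rule and equal weighting below are illustrative assumptions; CharCom's actual controller may differ.

```python
def select_adapters(prompt, registry):
    """Pick per-character LoRA adapters whose character names appear
    in the scene prompt, assigning equal blend weights.

    `registry` maps character names to adapter identifiers. The
    routing rule here is a hypothetical sketch of prompt-aware
    adapter composition, not the paper's exact mechanism.
    """
    active = [name for name in registry if name.lower() in prompt.lower()]
    if not active:
        return {}
    weight = 1.0 / len(active)
    return {name: weight for name in active}
```

Because each character lives in its own adapter, adding a new character means training one small LoRA rather than retraining the frozen backbone, which is what makes the approach composable and parameter-efficient.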


Joint Semantic-Channel Coding and Modulation for Token Communications

Ying, Jingkai, Qin, Zhijin, Feng, Yulong, Wang, Liejun, Tao, Xiaoming

arXiv.org Artificial Intelligence

In recent years, the Transformer architecture has achieved outstanding performance across a wide range of tasks and modalities. The token is the unified input and output representation in Transformer-based models and has become a fundamental information unit. In this work, we consider the problem of token communication, studying how to transmit tokens efficiently and reliably. The point cloud, a prevailing three-dimensional format that exhibits a more complex spatial structure than image or video, is chosen as the information source. We utilize the set abstraction method to obtain point tokens. Subsequently, to obtain a more informative and transmission-friendly representation based on tokens, we propose a joint semantic-channel coding and modulation (JSCCM) scheme for the token encoder, mapping point tokens to standard digital constellation points (modulated tokens). Specifically, the JSCCM consists of two parallel Point Transformer-based encoders and a differential modulator that combines the Gumbel-softmax and soft quantization methods. In addition, a rate allocator and a channel adapter are developed, facilitating adaptive generation of high-quality modulated tokens conditioned on both semantic information and channel conditions. Extensive simulations demonstrate that the proposed method outperforms both joint semantic-channel coding and traditional separate coding, achieving over 1 dB gain in reconstruction and more than a 6x compression ratio in modulated symbols.
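The soft-quantization step — mapping a continuous token feature onto a discrete constellation in a differentiable way — can be sketched as a softmax over negative squared distances to the constellation points. At training time the Gumbel-softmax trick would add Gumbel noise to these logits; this deterministic sketch omits the noise. The QPSK constellation and temperature value are illustrative assumptions, not the paper's configuration.

```python
import math

QPSK = [(1, 1), (1, -1), (-1, 1), (-1, -1)]  # illustrative 2-D constellation


def soft_quantize(point, constellation=QPSK, temperature=0.5):
    """Softly assign a continuous 2-D feature to constellation points
    via a softmax over negative squared distances, returning the
    probability-weighted symbol and the assignment probabilities.

    A low temperature sharpens the assignment toward the nearest
    point. Hypothetical sketch of soft quantization, not the JSCCM
    modulator itself (which also uses Gumbel noise during training).
    """
    logits = [-((point[0] - cx) ** 2 + (point[1] - cy) ** 2) / temperature
              for cx, cy in constellation]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    x = sum(p * cx for p, (cx, _) in zip(probs, constellation))
    y = sum(p * cy for p, (_, cy) in zip(probs, constellation))
    return (x, y), probs
```

Because the output is a convex combination of constellation points with differentiable weights, gradients can flow through the modulation step, which is what makes end-to-end training of the token encoder possible.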